Wednesday, December 31, 2025

Artificial Intelligence in 2026: A Hard Rain's A-Gonna Fall

For many reasons, AI may be heading for a storm.

This was a great year for the technology.  It absorbed tens of billions of dollars in spending, in the process accounting for, by one estimate, fully half of the nation’s Gross Domestic Product increase.  The NASDAQ index, heavy on technology stocks and highly reactive to AI news, rose 20% over the 52 weeks ending the morning of December 29th.  A long string of niche successes, from health care to robotics to shopping aids, has put AI in the news and in people’s lives.  Companies have generally done well and acted in good faith when problems with their products have materialized.  Press coverage was copious and predominantly positive, with a big drop in the number of stories about how and whether the technology endangers humankind.

Yet in some ways, 2025 was more of a getting-into-position year than one of overwhelming success.  The most profitable AI-related companies, starting with Nvidia, were not producing AI tools but providing chips and other resources to those that are.  That firm’s market capitalization, along with others’, mostly reflects expected future income, dwarfing what it has actually earned so far.

There are unresolved problems looming.  Many communities have recently said they do not want data centers, which have pushed up water and electricity prices, the latter nationwide.  Chinese competition, from an unfree state which need not reveal the practices it fosters and condones, strengthened greatly this year.  The American people have bifurcated into two groups, roughly the bottom two-thirds of families by earnings and those above them, and AI has helped the first cohort little while hurting it proportionally more through higher utility rates.  A variety of lawsuits against these corporations are in progress and have begun to be resolved, starting with the first of many large ones from those owning the rights to books and other material used by AI model builders without authorization.  The emergence of “artificial general intelligence,” not pegged to specific tasks, is expected no sooner than 2029, even under a recently shortened estimate.

What does all that mean?  First, what was accomplished with AI this year, consisting largely of limited if well-focused applications, does not require huge data centers for improved versions.  That will also be true for the vast majority of 2026 successes.  Second, if current market valuations are to be maintained, firms selling the software itself will need to start earning amounts consistent with the cost of the chips it requires.  Third, AI needs to be perceived as benefiting most Americans, or it may come to symbolize the richer-poorer split above.  Fourth, we want to see major-publication articles with titles and contents more positive and less demanding than “A 1 Percent Solution to the Looming A.I. Job Apocalypse” (Sal Khan, December 27th) and “An Anti-A.I. Movement Is Coming. Which Party Will Lead It?” (Michelle Goldberg, December 29th, both in the New York Times).  Fifth, it is time for the industry to reconcile its opposed pairs of messages: potential versus present, 2022 views versus 2025 views, and niche wins versus humanity-shaking feats.

For 2026, I predict continuing success for specific AI applications, but problems of financing, earnings, and public support causing industry concern and even panic.  Some companies may expensively exit the scene, which many will interpret as a crash or a bursting bubble.  Data center construction will level off near the middle of the year and be greatly reduced by Christmas.  Overall, artificial intelligence will end the year shaky, but in 2027 we will learn with much more accuracy where it is going – and not going.  For now, the people are many and their hands are all empty – as always, we pays our money and takes our chances.

Wednesday, December 24, 2025

Artificial Intelligence Regulation Since April, and Why It Won’t Be Settled Soon

Not a lot has changed this year in the laws around AI, but we’ve spent eight months getting into position for what could be a big year for that. 

First, a look at “Where the legal battle stands around copyright and AI training” (Patrick Kulp, Emerging Tech Brew, April 21st).  The short answer is unsettled, as although the Supreme Court will probably eventually hear a related case, “intellectual property lawyers say there aren’t yet many signs of where courts will land.”  As Anthropic seems to have used at least one of my books without permission, I was offered membership in a group to be compensated in a “$1.5 Billion Proposed Class Action Settlement.”  This may go through, and there may be similar resolutions offered by other AI companies.

Next, “Why the A.I. Race Could be Upended by a Judge’s Decision on Google” (David McCabe, The New York Times, May 1st).  Although “a federal judge issued a landmark ruling last year, saying that Google had become a monopolist in internet search,” that did not resolve whether it “could use its search monopoly to become the dominant player in A.I.”  A hearing started in April 2025 to settle that issue; at its conclusion four months later, in the view of Kate Brennan of Tech Policy Press, the resulting “Decision in US vs. Google Gets it Wrong on Generative AI” (September 11th).  The judge considered AI, unlike search engines, to be a competitive field, and rejected “many of the Department of Justice’s bold, structural remedies to unseat Google’s search monopoly position.”  That could be a problem, as “Google maintains control over key structural chokepoints, from AI infrastructure to pathways to the consumer.”  This conflict, though, may not be completely settled, as the extent to which that company can absorb more of the AI field with its Gemini product is unknown.

In “Trump Wants to Let A.I. Run Wild.  This Might Stop Him” (Anu Bradford, The New York Times, August 18th), we see that our presidential administration produced an “A.I. Action Plan, which looks to roll back red tape and onerous regulations that it says paralyze A.I. development.”  The piece says that while “Washington may be able to eliminate the rules of the road at home… it can’t do so for the rest of the world.”  That includes the European Union, which follows its “A.I. Act,” which “establishes guardrails against the possible risks of artificial intelligence, such as the loss of privacy, discrimination, disinformation and A.I. systems that could endanger human life if left unchecked.”  If Europe “will take a leading role in shaping the technology of the future” by “standing firm,” it could effectively limit AI companies around the world.

From there, “Status of statutes” (Patrick Kulp, Jordyn Grzelewski, and Annie Sanders, Tech Brew, October 3rd) told us that, that week, California had passed “major AI legislation… establishing some of the country’s strongest safety regulations,” which “will require developers of the most advanced AI models to publish more details about safety steps taken in development and create more protections for whistleblowers at AI companies.”  Comments, on both sides, were that the law “is a good start,” “doesn’t necessarily go far enough,” and “is too focused on large companies.”  It may, indeed, be changed, and other states considering such efforts will learn from California’s experience.

Weeks later, “N.Y. Law Could Set Stage for A.I. Regulation’s Next ‘Big Battleground’” (Tim Balk, The New York Times, November 29th).  New York “became the first state to enact a law targeting a practice, typically called personalized pricing or surveillance pricing, in which retailers use artificial intelligence and customers’ personal data to set prices online.”  Companies using that practice in New York will now need to post “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.”  As of the article’s publication, there were “bills pending in at least 10 states that would either ban personalized pricing outright or require disclosures.”  Expect more.

After a “federal attempt to halt state AI regulations,” “State-level AI rules survive – for now – as Senate sinks moratorium despite White House pressure” (Alex Miller, Fox News, December 6th).  Although “the issue of a blanket AI moratorium, which would have halted states from crafting their own AI regulations, was thought to have been put to bed over the summer,” it “was again revived by House Republicans.”  Would this be constitutional, given that AI is not named in the Constitution as an area to be overseen by the federal government?  Or would it just be another power grab?

The latest article here, fittingly, is “Fox News Poll:  Voters say go slow on AI development – but don’t know who should steer” (Victoria Balara, Fox News, December 18th).  “Eight in ten voters favor a careful approach to developing AI,” but “voters are divided over who should oversee the new technology, splitting between the tech industry itself (28%), state governments (26%), and Congress (24%).” Additionally, 11% “think the president should regulate it… while about 1 in 10 don’t think it should be regulated at all.”  That points up how contentious the artificial intelligence regulation issue is – and tells us that, urgent need or not, it may take longer to resolve than we might think.  We will do what we can, but once again it won’t be easy.

Merry Christmas, happy Hanukkah, happy Kwanzaa, and happy new year.  I’ll see you again on January 2nd.

Tuesday, December 16, 2025

November’s Employment Data Sluggish and Worse – AJSN Shows Latent Demand Up 110,000 to 17 Million

Here we are with the first Bureau of Labor Statistics Employment Situation Summary since last month, the first new data in two months, and the first timely information in three.  Was it worth the wait?

The headline number, the count of net new nonfarm payroll positions, exceeded its 45,000 estimate, but not by much, at 64,000.  Seasonally unadjusted unemployment stayed at 4.3%, but the adjusted figure gained 0.2% since September to reach 4.6%.  The unadjusted number of unemployed rose 200,000 to 7.8 million, of which 1.9 million were designated as long-term, or out for 27 weeks or longer, up 100,000.  The labor force participation rate gained 0.1% to 62.5%, but the employment-population ratio, best showing how common it is for Americans to be working, lost the same amount to 59.6%.  Average private nonfarm payroll earnings grew 19 cents since September, less than inflation, to reach $36.86.  The alarming change was to the count of those working part-time for economic reasons, or holding on to less than full-time opportunities while looking thus far unsuccessfully for full-time ones.  That soared 900,000, to 5.5 million.

The American Job Shortage Number or AJSN, which shows how many additional positions could be quickly filled if all knew they would be easy to get, increased modestly, by 110,000 to about 17 million, as follows:

The largest change came from those discouraged, adding almost 130,000 to the metric, followed by unemployment itself, which contributed 69,000 more.  The main subtraction came from people wanting to work but not having searched for it for a year or more, which took away 62,000.  Of the AJSN, 39.1% came from those officially jobless, up 0.2% from September.

Compared with November 2024, the AJSN rose 1.2 million, half of that from official unemployment, and most of the rest from those discouraged and those not looking for at least a year. 

What does all that add up to?  Adding the modest increases in unadjusted unemployment (+172,000), those not in the labor force (+156,000), and those claiming no interest in work (+175,000) does not change anything much.  It was a torpid month or two, with few positive outcomes.  The fair number of new jobs was offset by what we hope is not a new plateau for those working part-time for economic reasons.  We also don’t like the rising unemployment rate, the highest in four years and in need of more front-line attention.  Without knowing what the turtle did in October, we saw him stay just where he was the month after.
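For readers curious about the mechanics, here is a minimal sketch, in Python, of how a latent-demand figure like the AJSN can be built up from its components.  The group counts and take-up shares in it are hypothetical placeholders, not the official Royal Flush Press weights; only the 7.8 million unemployed figure comes from the report above.

```python
# A minimal sketch of assembling a latent-demand metric like the AJSN.
# The counts and take-up shares below are hypothetical placeholders,
# not the official Royal Flush Press values.

COMPONENTS = {
    # group name: (count of people, assumed share who would take an easy-to-get job)
    "officially unemployed":               (7_800_000, 0.90),
    "discouraged":                         (  500_000, 0.90),
    "want work, no search in past year":   (3_600_000, 0.30),
    "not interested in work now":          (88_000_000, 0.05),
    "all other groups (school, ill health, expatriates, etc.)": (50_000_000, 0.08),
}

def latent_demand(components):
    """Sum each group's count times its assumed job take-up share."""
    return sum(count * share for count, share in components.values())

total = latent_demand(COMPONENTS)
unemployed_count, unemployed_share = COMPONENTS["officially unemployed"]
print(f"Estimated latent demand: {total:,.0f} positions")
print(f"Share from official unemployment: {unemployed_count * unemployed_share / total:.1%}")
```

The point is simply that each group contributes its count times an assumed probability of taking an easily available job, and official unemployment supplies only part of the total.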

Friday, December 12, 2025

Two Months of Driverless Cars, With Progress and Cogent Observations

This area has been heating up lately.  It’s been a good but limited year for autonomous vehicles, with those offering them mainly building on their robotaxi success.  What’s been happening?

On that, “Way more” (The Economist, October 4th) discussed “the peculiar economics of self-driving taxis,” claiming that “the rise of autonomy has played out in two different ways.  First it has raised overall taxi demand in San Francisco.  Second, it has catered to a lucrative corner of the market.”  The number of rides in cabs with drivers stayed the same, and from 2023 to 2024 the count of people working in “taxi and limousine service” increased 7%, leading Lyft’s CEO to say that autonomous taxis will “actually expand the market.”

Moving along, “Could a driverless car deliver your next DoorDash?  New collab announced” (Michelle Del Rey, USA Today, October 16th).  That company is combining with Waymo to “launch the testing phase of an autonomous delivery service in the Phoenix metro area, with plans to expand it more broadly this year.”  Customers, if they are “in an eligible area,” can use the Waymo app’s “Autonomous Delivery Platform.”  In addition to using human “dashers,” DoorDash is already making at least some deliveries with robots and drones.

At the other end of the size scale, an “AI truck system matches top human drivers in massive safety showdown with perfect scores” (Kurt Knutsson, Fox News, October 29th).  Autonomous system Kodiak Driver’s rating, described as 98 on a 1-100 scale on the industry assessment VERA, “placed it beside the safest human fleets.”  The self-driving trucks have “advanced monitoring and hazard detection systems,” and have eliminated many human problems, such as “distraction, fatigue and delayed reaction.”  Nothing was provided, though, on how many of these trucks are running now, or whether they are being used for true production – but see three paragraphs below for one modest data point.

Now we can expect “Waymo to launch robotaxi service in Las Vegas, San Diego and Detroit in 2026” (Akash Sriram, USA Today, November 4th).  The first two cities aren’t surprising, but can such vehicles deal with snow?  “In Detroit, the company said its winter-weather testing in Michigan’s Upper Peninsula has strengthened its ability to operate year-round.”  We will see if that place can really join “Phoenix, San Francisco, Los Angeles, and Austin,” where it “has completed more than 10 million trips.”

Miami is not in that group, but there, a “Sheriff’s office tests America’s first self-driving police SUV” (Kurt Knutsson, Fox News, November 6th).  This “bold experiment” is a “year-long pilot program” of “the Police Unmanned Ground Vehicle Patrol Partner, or PUG,” which “is packed with high-tech features” including interfaces “with police databases, license plate readers and crime analytics software in real time,” and “can drive itself, detect suspicious activity through artificial intelligence-powered cameras and even deploy drones for aerial surveillance.”  A massive, if scary, potential help for law enforcement.

Are “Self-driving cars still out of reach despite years of industry promises” (Jackie Charniga, USA Today, November 25th)?  Although “driverless semitrucks have traveled more than 1,000 miles hauling cargo between Dallas and Houston,” and robotaxis are established as above, “the unmanned vehicles circulating on American highways and side streets are a fraction of what executives promised in the giddy early days.”  We know that, though, and progress, on a more specialized and certainly slower track, is still real.  Don’t bet anything you don’t want to lose against improvement continuing indefinitely.

On the other hand, autonomous vehicles are still embarrassing themselves.  We now have “US investigating Waymo after footage captures self-driving cars illegally moving past school buses in Texas” (Bonny Cho, Fox Business, December 4th).  Driverless technology has struggled mightily to understand, on a detailed level, how human drivers think, and has not been able to quantify some large pieces of that, but why weren’t school buses, with telltale flashing lights and the capability of being tagged in some way, long since identified and understood?  Were there none of them in the mile-square testing grounds where base autonomous software was developed?  This is the kind of thing that causes people to be overly fearful, and, if there are many more problems remaining at this stage, that fear is justified.  I hope there are no more humiliations as basic as this yet to emerge.

Ever since I first wrote about driverless technology, close to ten years ago, I have been making points about how beneficial it would be.  Avoiding the tens of thousands of annual deaths caused by human driver error was the main benefit, followed by higher general prosperity and easier transportation for older children, impaired people, and others unable to drive.  As when our current cars became the norm, we would not know all of self-driving’s effects, but many, such as reduced smoking as people would eventually not need to stop at gas stations where many now buy cigarettes, would be both probable and valuable.  I have been disappointed by overreactions to autonomous vehicles’ tiny numbers of fatalities, governmental unwillingness to allow the technology to progress, and a general lack of will and ability to see how many lives could be saved, but there has lately been, in two places, at least a small advance.

The first opinion piece was “Auto injuries are my job.  I want Waymo to put me out of work” (Marc Lamber, USA Today, November 21st).  The author, with “a 34-year career as a plaintiff personal injury lawyer,” said his “calls have been heartbreakingly familiar:  a parent and spouse is paralyzed because someone was texting; a pedestrian on a sidewalk is killed because a driver had “just two drinks”; a family is shattered by speeding, fatigue or road rage.”  He pointed out that “autonomous driving technology doesn’t get drunk, distracted, tired or tempted to speed,” and that “a rare autonomous vehicle mistake dominates headlines while the daily toll of human driving error goes underreported.”  He mentioned that Waymo, “across 96 million miles without a human driver,” had “91% fewer serious injury crashes, 79% less airbag deployment crashes,” and “92% fewer injury-resulting pedestrian collisions,” along with “89% less injury-causing motorcycle collisions and 78% fewer injury-related cyclist crashes.”  Overall, “that is not perfection.  That is progress worth protecting.”

The second piece, published on the New York Times website on December 2nd and in the Sunday print edition on December 7th, by Jonathan Slotkin, a neurosurgeon, was titled, in the latter, “The Human Driver Is a Failed Experiment.”  He made many of the same points Lamber did, adding that “more than 39,000 Americans died in motor vehicle crashes last year,” of which “the combined economic and quality-of-life toll exceeds $1 trillion annually, more than the entire U.S. military or Medicare budget.”  He said that “if 30 percent of cars were fully automated, it might prevent 40 percent of crashes,” and that “insurance markets will accelerate this transition, as premiums start to favor autonomous vehicles,” but “many cities are erecting roadblocks,” and “in a future where manual driving becomes uncommon, perhaps even quaint, like riding horses is today… we no longer accept thousands of deaths and tens of thousands of broken spines as the price of mobility.”  He ended:  “it’s time to stop treating this like a tech moonshot and start treating it like a public health intervention.”
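Slotkin’s numbers invite a quick back-of-envelope check.  Here is a minimal sketch using only the figures quoted above, under the simplifying assumption, which the article does not itself make, that deaths and costs fall in proportion to crashes prevented.

```python
# Back-of-envelope arithmetic using the figures quoted above.  Assumes deaths
# and costs fall in proportion to crashes prevented, a simplification the
# article itself does not assert.

annual_deaths = 39_000        # "more than 39,000 Americans died in motor vehicle crashes last year"
annual_toll_dollars = 1.0e12  # "exceeds $1 trillion annually"
crash_reduction = 0.40        # "might prevent 40 percent of crashes"
fleet_share_automated = 0.30  # "if 30 percent of cars were fully automated"

deaths_averted = annual_deaths * crash_reduction
toll_averted = annual_toll_dollars * crash_reduction

print(f"With {fleet_share_automated:.0%} of cars automated:")
print(f"  deaths potentially averted per year: {deaths_averted:,.0f}")
print(f"  toll potentially averted per year: ${toll_averted / 1e9:,.0f} billion")
```

Even under those rough assumptions, the implied figures, on the order of fifteen thousand lives and hundreds of billions of dollars a year, show why he frames it as a public health question.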

Do we want this outcome?  If not, why not?

Friday, December 5, 2025

Artificial Intelligence Going Right Means No Total Crash is Possible

There’s been ever-increasing talk about an “AI bubble,” perhaps meaning a business shakeout but, to some ways of thinking, a fear that it will all prove illusory.  It may well fall short of being a massive, overarching technological change, but over 2025, and especially over the past three months, it has produced a steady flow of valuable applications.  Here are some worthy of your attention.

To stanch a problem that had been causing deaths and threatening huge lawsuit settlements, we saw “OpenAI announces measures to protect teens using ChatGPT” (Stephen Sorace, Fox Business, September 16th).  These “strengthened protections for teens will allow parents to link their ChatGPT account with their teen’s account, control how ChatGPT responds to their teen with age-appropriate model behavior rules and manage which features to disable, including memory and chat history.”  The system is now in place, and is at least a commendable start.

At another corporate giant, “Elon Musk Gambles on Sexy A.I. Companions” (Kate Conger, The New York Times, October 6th).  And they are certainly trying to be.  Musk’s firm xAI offered “cartoonish personas” which “resemble anime characters and offer a gamelike function:  As users progress through “levels” of conversation, they unlock more raunchy content, like the ability to strip (them) down to lacy lingerie.” They would also talk about sex, and have kindled romantic, as opposed to pornographic, user interest.  As for the latter, “ChatGPT to allow ‘erotica for verified adults,’ Altman says” (Anders Hagstrom, Fox Business, October 15th).  CEO Sam Altman claimed he implemented this capability partly as a response to successfully limiting teens as above, and expected that “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more.”

In a rather unrelated achievement, “Researchers create revolutionary AI fabric that predicts road damage before it happens” (Kurt Knutsson, Fox News, October 15th).  “Researchers at Germany’s Fraunhofer Institute have developed a fabric embedded with sensors and AI algorithms that can monitor road conditions from beneath the surface,” which would “make costly, disruptive road repairs far more efficient and sustainable” by assessing “cracks and wear in the layers below the asphalt.”  The fabric “continuously collects data,” and “a connected unit on the roadside stores and transmits this data to an AI system that analyzes it for early warning signs.”  Seems conceptually solid, and is now being tested.
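The article describes the pipeline, embedded sensors feeding a roadside unit that passes readings to an AI system watching for early warning signs, without technical detail.  As a purely illustrative sketch of what that analysis step might look like, here is a simple rolling-baseline check over strain readings; the sensor values, window size, and threshold are my assumptions, not anything from Fraunhofer.

```python
# Purely illustrative: flag possible early warning signs in subsurface strain
# readings by comparing each new reading to a rolling baseline.  The readings,
# window size, and threshold are assumptions, not from the article.
from collections import deque
from statistics import mean, stdev

def make_detector(window=50, z_threshold=3.0, warmup=10):
    history = deque(maxlen=window)
    def check(reading):
        """Return True if the reading deviates sharply from recent history."""
        flag = False
        if len(history) >= warmup:
            mu, sigma = mean(history), stdev(history)
            flag = sigma > 0 and abs(reading - mu) / sigma > z_threshold
        history.append(reading)
        return flag
    return check

check = make_detector()
readings = [1.01, 1.02, 0.99, 1.00, 1.01, 1.03, 0.98, 1.00, 1.02, 1.01, 1.45]
for value in readings:
    if check(value):
        print(f"Early-warning flag on strain reading {value}")
```

The real system presumably does something far more sophisticated; the point is only that continuous readings plus a baseline comparison can surface trouble before it reaches the surface of the road.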

If you want more than just sexy companion personas, now “People are talking with ‘AI Jesus.’  But do they have a prayer?” (Scott Gunn, Fox News, October 26th).  The author named concerns with that app, some from his Christian perspective, such as “your conversation might take a strange turn when “Jesus” says something that’s just not true or makes up a Bible verse or reference that doesn’t exist,” and that using it constitutes “replacing the living and true God with a false God.” He also noted that “people in church… will answer your questions and support you through uncertain times.”  This program could be used as an attempt to learn Christian teachings, and end up helping people “grow in faith and love,” but, per Gunn, it’s no substitute for the old-fashioned means.

Medical-related AI uses have been growing exponentially, and, in the October 30th New York Times, Simar Bajaj gave us “5 Tips When Consulting ‘Dr.’ ChatGPT.”  Although “ChatGPT can pass medical licensing exams and solve clinical cases more accurately than humans can,” and chatbots “are great at creating a list of questions to ask your doctor, simplifying jargon in medical records and walking you through your diagnosis or treatment plan,” they “are also notorious for making things up, and their faulty medical advice seems to have also caused real harm.”  The pieces of advice are “practice when the stakes are low,” “share context – within reason,” “check in during long chats” by asking the bot to summarize what it “knows,” “invite more questions,” and “pit your chatbot against itself” by requesting and verifying sources.

Back to romantic uses with “How A.I. Is Transforming Dating Apps” (Eli Tan, The New York Times, November 3rd).  The area of online dating, per a mountain of articles and anecdotal reports, is now a disaster zone of dissatisfaction, so the appearance of “artificial intelligence matchmakers” must at least have potential.  People enter information about what kind of partner they want, the tool distills the possibilities down to one candidate, and the user pays individually for that introduction.  I don’t think this is really anything new, just an adjustment from providing a small number of recommendations to providing only one, but perceptions are powerful, and sending $25 for a crack at meeting “the one” may turn out to have great emotional, and even logistical, appeal.

Another personal thing AI has been doing is counseling.  But “Are A.I. Therapy Chatbots Safe to Use?” (Cade Metz, The New York Times, November 6th).  The question here is not whether the products are useful, but whether they “should be regulated as medical devices.”  The day this article was published, as “how well therapy chatbots work is unclear,” “the Food and Drug Administration held its first public hearing to explore that issue.”  At the least, such programs will be usable only unofficially for psychiatric counseling; at best, certain ones will be formally, and perhaps legally, approved.

The other side of one of the technology’s most established settings came out in “I’m a Professor.  A.I. Has Changed My Classroom, but Not for the Worse” (Carlo Rotella, also in the Times, November 25th).  The author, a Boston College English instructor, related that his students “want to be capable humans” and “independent thinkers,” and that “the A.I. apocalypse that was expected to arrive in full force in higher education has not come to pass just yet.”  He had told his learners that “reading is thinking and writing is thinking,” “using A.I. to do your thinking for you is like joining the track team and doing your laps on an electric scooter,” and “you’re paying $5 a minute for college classes; don’t spend your time here practicing to be replaceable by A.I.”  Those things, and the “three main elements” of “an A.I.-resistant English course,” namely “pen-and-paper and oral testing, teaching the process of writing rather than just assigning papers, and greater emphasis on what happens in the classroom,” have seen this contributor through well.

In the same publication on the same day, Gabe Castro-Root asked us “What Is Agentic A.I., and Would You Trust It to Book a Flight?”  Although not ready now, its developers claim it “will be able to find and pay for reservations with limited human involvement,” once the customer provides his or her credit card data and “parameters like dates and a price range for their travel plans.”  For now, agentic A.I. can “offer users a much finer level of detail than searches using generative tools.”  One study found that earlier this year, “just 2 percent of travelers were ready to give A.I. autonomy to book or modify plans after receiving human guidance.”  If hallucinated flights, hotels, and availability prove to be a problem, that may not get much higher.
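To make “parameters like dates and a price range” concrete, here is a minimal, hypothetical sketch of the constrained hand-off such an agent might accept.  The field names, the confirmation step, and the fares are invented for illustration; no real booking system works exactly this way.

```python
# Hypothetical sketch of the constrained hand-off to a booking agent.  Field
# names, the confirmation flag, and the fare list are assumptions; no real
# travel API is being called.
from dataclasses import dataclass

@dataclass
class TripRequest:
    origin: str
    destination: str
    depart_date: str            # e.g. "2026-03-10"
    return_date: str            # e.g. "2026-03-17"
    max_total_price: float      # hard ceiling the agent may not exceed
    require_confirmation: bool = True   # pause before actually paying

def book_trip(request: TripRequest, candidate_fares: list) -> str:
    """Pick the cheapest fare within budget; defer payment to the human if asked."""
    in_budget = [fare for fare in candidate_fares if fare <= request.max_total_price]
    if not in_budget:
        return "No option within budget; nothing booked."
    best = min(in_budget)
    if request.require_confirmation:
        return f"Found ${best:.2f} fare; awaiting traveler confirmation before paying."
    return f"Booked ${best:.2f} fare autonomously."

request = TripRequest("JFK", "LAX", "2026-03-10", "2026-03-17", max_total_price=450.00)
print(book_trip(request, [512.00, 438.50, 479.99]))
```

The study’s 2 percent figure suggests most travelers would keep that confirmation flag turned on for a long time yet.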

Another capability not here yet but perhaps on the way is “Another Use for A.I. – Talking to Whales” (David Gruber, again in the Times, November 30th).  Although the hard part, actually understanding whale sounds, is still in the future, AI has proved handy in anticipating “word patterns” as it does with human language, and can “accurately predict” the clicks whales make “while socializing,” along with “the whale’s vocal clan, and the individual whale with over 90 percent accuracy.”  We don’t know how long it will take for humans to decode this information, but AI is helping to clear conceptual problems in advance.

Once more in the November 25th New York Times was the revelation that “A.I. Can Do More of Your Shopping This Holiday Season” (Natalie Rocha and Kailyn Rhone).  Firms providing “chatbots that act as conversational stylists and shopping assistants” include Ralph Lauren, Target, and Walmart.  Customers with ChatGPT can use an “instant checkout feature” so they “can buy items from stores such as Etsy without leaving the chat.”  Google’s product “can call local stores to check if an item is in stock,” and “Amazon rolled out an A.I. feature that tracks price drops and automatically buys an item if it falls within someone’s budget.”  While “many of the A.I. tools are still experimental and unproven,” per a Harris poll “roughly 42 percent of shoppers are already using A.I. tools for their holiday shopping.” 
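Amazon’s described feature, tracking price drops and buying automatically when an item falls within someone’s budget, is at bottom a simple trigger rule.  A minimal sketch of that logic follows; the names and structure are mine, not Amazon’s.

```python
# Minimal sketch of a price-drop auto-buy rule like the one described above.
# The function, names, and price history are hypothetical illustrations, not
# Amazon's implementation.

def maybe_auto_buy(item, current_price, budget, purchase):
    """Buy the item the first time its price falls to or below the budget."""
    if current_price <= budget:
        purchase(item, current_price)
        return True
    return False

purchases = []
price_history = [64.99, 59.99, 48.50]          # simulated observed prices
for price in price_history:
    if maybe_auto_buy("wireless headphones", price, budget=50.00,
                      purchase=lambda item, p: purchases.append((item, p))):
        break

print(purchases)   # -> [('wireless headphones', 48.5)]
```

The interesting part for shoppers is less the rule itself than how much autonomy they are willing to grant it with their money.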

And so it is going.  Most of these innovations don’t require more expensively expanded large language models.  Why would people stop using them?  Why would companies stop improving them in other ways?  They are here to stay, and so, it must be, is artificial intelligence.

Wednesday, November 26, 2025

September’s Jobs Report – Months Ago Now, with Mild Changes – AJSN Now 16.9 Million

Between the government shutdown and my own outage, we’re about eight weeks later with this one than we usually are, but it still has something meaningful to say.  What?

The number of net new nonfarm payroll positions in the Bureau of Labor Statistics Employment Situation Summary came in at 119,000, not huge but strongly positive and exceeding a few estimates.  Seasonally adjusted unemployment was 4.4%, up 0.1%, and the unadjusted variety, reflecting work increases in September, fell from 4.5% to 4.3%, with the unadjusted count of those with jobs up 606,000, just more than last time’s loss, to reach 163,894,000.  The two measures showing how many Americans are working or only one step away, the employment-population ratio and the labor force participation rate, each gained 0.1%, to 59.7% and 62.4% respectively.  The count of those working part-time for economic reasons, or keeping at least one part-time position while looking thus far unsuccessfully for full-time work, was down 100,000 to 4.8 million, as was the number of people officially unemployed for 27 weeks or longer, which reached 1.8 million.  Average private hourly nonfarm payroll earnings rose 14 cents, a bit more than inflation, to $36.67.

The American Job Shortage Number or AJSN, the Royal Flush Press statistic showing how many additional positions could be quickly filled if all knew they would be easy to get, lost 844,000, mostly seasonally, to reach about 16.9 million, as follows:

Less than half of the drop was from lower unemployment – more was from a large cut in those reporting they wanted to work but had not looked for it during the previous year.  The other factors changed little.  Year-over-year, the AJSN increased 316,000, with unemployment up since September 2024 and those not wanting work adding 115,000.  The share of the AJSN from official joblessness shrank 0.3% to 38.9%.

What happened this time?  Not a great deal, and barely better than neutral.  Those not interested in work rose 750,000, which, with August’s 860,000, meant over 1.6 million in two months, which is a lot.  Otherwise, everything reasonably hung on.  There will be no October AJSN or Employment Situation Summary, but you can expect November’s writeup to appear here on the next jobs report’s December 16th release date.  For now, the turtle managed only a tiny step forward.

Thursday, November 13, 2025

Artificial Intelligence Going Wrong: Eleven Weeks of Real or Questionable Problems

Somewhere between AI’s accomplishments and its postulated threats to humanity are things with it that have gone wrong, and concerns that something might.  Here are nine – almost one per week since the end of August.

A cuddly danger?  In “Experts warn AI stuffed animals could ‘fundamentally change’ human brain wiring in kids” (Fox News, August 31st), Kurt Knutsson reported that “pediatric experts warn these toys could trade human connection for machine conversation.”  Although television has been doing that for generations, some think that with AI playthings, “kids may learn to trust machines more than people,” which could damage “how kids build empathy, learn to question, and develop critical thinking.”  All of this is possible, but speculative, and nothing in this piece convinced me AI toys’ effect would be much more profound than TV’s.

A good if preliminary company reaction was the subject of “OpenAI rolls out ChatGPT parental controls with help of mental health experts” (Rachel Wolf, Fox Business, September 2nd).  In response to a ChatGPT-facilitated suicide earlier this year, “over the next 120 days… parents will be able to link their accounts with their teens’ accounts, control how ChatGPT responds to their teen, manage memory and chat history features and receive notifications if their child is using the technology in a moment of acute distress.”  That will be valuable from the beginning, and will improve from there.

On another problem front, “Teen sues AI tool maker over fake nude images” (Kurt Knutsson, Fox News, October 25th).  The defendant, AI/Robotics Venture Strategy 3 Ltd., makes a product named ClothOff, which can turn a photo into a simulated nude, keeping the original face.  The plaintiff’s classmate did that to a photo of hers and shared it, and “the fake image quickly spread through group chats and social media.”  As of the article’s press time, “more than 45 states have passed or proposed laws to make deepfakes without consent a crime,” and “in New Jersey,” where this teenager was living, “creating or sharing deceptive AI media can lead to prison time and fines.”  Still, “legal experts say this case could set a national precedent,” as “judges must decide whether AI developers are responsible when people misuse their tools” and “need to consider whether the software itself can be an instrument of harm.”  The legal focus here may need to be on sharing such things, not just on creating or possessing them, which will prove impossible to stop.

In a Maryland high school, “Police swarm student after AI security system mistakes bag of chips for gun” (Bonny Chu, Fox News, October 26th).  Oops!  This was perpetrated by “an artificial intelligence gun detection system,” which ended up “leaving officials and students shaken,” as, per the student, “police showed up, like eight cop cars, and they all came out with guns pointed.”  I advise AI tool companies to do their beta testing in their labs, not in live high school parking lots.

Was the action taken by the firm in the third paragraph above sufficient?  No, Steven Adler said, in “I Worked at OpenAI.  It’s Not Doing Enough to Protect People” (The New York Times, October 28th).  Although the company “ultimately prohibited (its) models from being used for erotic purposes,” and its CEO claimed that the parental-control feature above “had been able to “mitigate” these issues,” per Adler it “has a history of paying too little attention to established risks,” and it needs to use “sycophancy tests” and “commit to a consistent schedule of publicly reporting its metrics for tracking mental health issues.”  I expect that the AI-producing firms will increasingly do such things.  And more are in progress, such as “Leading AI company to ban kids from chatbots after lawsuit blames app for child’s death” (Bonny Chu, Fox Business, October 30th).  The firm here, Character.ai, which is “widely used for role-playing and creative storytelling with virtual characters,” said that “users under 18 will no longer be able to engage in open-ended conversations with its virtual companions starting Nov. 24.”  It will also restrict minors to no more than two daily hours of “chat time.”

In the October 29th New York Times, Anastasia Berg tried to show us “Why Even Basic A.I. Use Is So Bad for Students.”  Beyond academic cheating, “seemingly benign functions” such as AI-generated summaries “are the most pernicious for developing minds,” as they stunt the meta-skill of being able to summarize things oneself.  Yet the piece contains its own refutation, as “Plato warned against writing,” since “literate human beings… would not use their memories.”  Technology, from 500 BC to 2025 AD, has always brought tradeoffs.  Just as calculators have made some arithmetic unnecessary but have hardly extinguished the need to know and use it, people may indeed become weaker at summarizing formal material, but they will continue to have no choice except to do that throughout the rest of their lives.

We’re getting more legal action than that mentioned above, as “Lawsuits Blame ChatGPT for Suicides and Harmful Delusions” (Kashmir Hill, The New York Times, November 6th).  Seven cases were filed that day alone: three on behalf of users who killed themselves after extensive ChatGPT involvement, another who made suicide plans, two who had mental breakdowns, and one saying the software had encouraged him to be delusional.  As before, this company will need to continually refine its safeguards, or it may not survive at all.

I end with another loud allegation, this one from Brian X. Chen, who told us, also in the November 6th New York Times, “How A.I. and Social Media Contribute to ‘Brain Rot.’”  He started by noting that people “using A.I.-generated summaries” got less specific information than through “traditional Google” searches, and went on to say that those who used “chatbots and A.I. search tools for tasks like writing essays and research” were “generally performing worse than people who don’t use them.”  All of that, though, when it means using AI as a substitute for personal work, is obvious, and not “brain rot.”  This article leaves open the question of whether the technology hurts when it is being used to help, not to write.

Three conclusions on the above jump out.  First, as AI progresses it will also bring along problems.  Second, legally and socially acceptable AI considerations are continuing to be defined and to evolve, and we’re nowhere near done yet.  Third, fears of adverse mental and cognitive effects from general use are, thus far, unsubstantiated.  Artificial intelligence will bring us a lot, both good and bad, and we will, most likely, excel at profiting from the former and stopping the latter.